
Trial-Based Dominance Enables Non-Parametric Tests to Compare both the Speed and Accuracy of Stochastic Optimizers

Price, Kenneth V., Kumar, Abhishek, Suganthan, Ponnuthurai N.

arXiv.org Artificial Intelligence

Non-parametric tests can determine the better of two stochastic optimization algorithms when benchmarking results are ordinal, like the final fitness values of multiple trials. For many benchmarks, however, a trial can also terminate once it reaches a pre-specified target value. When only some trials reach the target, two variables characterize a trial's outcome: whether (and how quickly) it reaches the target value, and its final fitness value. This paper describes a simple way to impose a linear order on this two-variable trial data set so that traditional non-parametric methods can determine the better algorithm when neither dominates the other. We illustrate the method with the Mann-Whitney U-test. A simulation demonstrates that U-scores are much more effective than dominance alone at identifying the better of two algorithms. We then use U-scores to determine the winners of the CEC 2022 Special Session and Competition on Real-Parameter Numerical Optimization.
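The ordering idea in the abstract can be sketched in a few lines: trials that reach the target are ranked by the time they took (faster is better) and outrank every trial that did not; unsuccessful trials are ranked by final fitness (lower is better, assuming minimization). A Mann-Whitney U statistic then follows from pairwise comparisons. This is a minimal illustrative sketch, not the paper's exact protocol; the tie-handling and minimization convention are assumptions.

```python
def trial_key(trial):
    """Map a trial to a sortable key: trials that hit the target (ranked by
    time, faster = better) precede trials that did not (ranked by final
    fitness, lower = better, assuming minimization)."""
    reached, value = trial  # value = time if reached, else final fitness
    return (0, value) if reached else (1, value)

def u_score(trials_a, trials_b):
    """Mann-Whitney U statistic for algorithm A: count the pairs in which
    A's trial outranks B's under the linear order (ties count 0.5)."""
    u = 0.0
    for a in trials_a:
        for b in trials_b:
            ka, kb = trial_key(a), trial_key(b)
            if ka < kb:
                u += 1.0
            elif ka == kb:
                u += 0.5
    return u

# Hypothetical data: A hits the target in 3 of 4 trials; B in only 1 of 4.
A = [(True, 120), (True, 95), (True, 300), (False, 1e-3)]
B = [(True, 200), (False, 1e-2), (False, 5e-1), (False, 1e-4)]
print(u_score(A, B))  # → 13.0 (out of 16 possible pairs, favoring A)
```

A larger U (relative to the number of pairs) indicates the stronger algorithm; the usual Mann-Whitney significance tables or a normal approximation apply unchanged, since the two-variable outcomes have been reduced to a single linear order.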


Artificial intelligence could revive the art of medicine

#artificialintelligence

Doctors practice medicine to deliver care, not do data entry. Yet in the era of electronic medical records (EMRs), for every hour spent with a patient, physicians spend nearly two hours on paperwork. What if technology could take care of the paperwork for us? Record-keeping systems in health care were built for back-office functions, not bedside medicine. Most EMR vendors started out building products to collect payments and schedule appointments.


Neural Wikipedian: Generating Textual Summaries from Knowledge Base Triples

Vougiouklis, Pavlos, Elsahar, Hady, Kaffee, Lucie-Aimée, Gravier, Christoph, Laforest, Frederique, Hare, Jonathon, Simperl, Elena

arXiv.org Artificial Intelligence

Most people do not interact with Semantic Web data directly. Unless they have the expertise to understand the underlying technology, they need textual or visual interfaces to help them make sense of it. We explore the problem of generating natural language summaries for Semantic Web data. This is non-trivial, especially in an open-domain context. To address this problem, we explore the use of neural networks. Our system encodes the information from a set of triples into a vector of fixed dimensionality and generates a textual summary by conditioning the output on the encoded vector. We train and evaluate our models on two corpora of Wikipedia snippets loosely aligned with DBpedia and Wikidata triples, with promising results.
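The encoding step described above (a variable-size set of triples mapped to one fixed-size vector) can be illustrated with a toy sketch. Here, hash-based pseudo-embeddings and mean pooling stand in for the paper's learned encoder; `DIM`, `embed`, and `encode_triples` are illustrative names, not the authors' API.

```python
import hashlib

DIM = 8  # illustrative per-token embedding size

def embed(token):
    """Deterministic pseudo-embedding: hash a token into a DIM-dim vector
    of floats in [0, 1] (a stand-in for a learned embedding lookup)."""
    h = hashlib.sha256(token.encode()).digest()
    return [b / 255.0 for b in h[:DIM]]

def encode_triples(triples):
    """Embed each (subject, predicate, object) triple by concatenating the
    three part embeddings, then mean-pool the set into a single vector of
    fixed length 3 * DIM, regardless of how many triples were given."""
    vecs = [embed(s) + embed(p) + embed(o) for s, p, o in triples]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(3 * DIM)]

triples = [
    ("Douglas_Adams", "birthPlace", "Cambridge"),
    ("Douglas_Adams", "occupation", "Writer"),
]
vec = encode_triples(triples)
print(len(vec))  # → 24: fixed dimensionality regardless of triple count
```

A sequence decoder (e.g. an RNN) would then condition on `vec` to emit the summary text token by token; that decoding step is omitted here.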